TREC 2007 ciQA Track at RMIT and CSIRO

Authors

  • Mingfang Wu
  • Andrew Turpin
  • Falk Scholer
  • Yohannes Tsegay
  • Ross Wilkinson
Abstract

The only difference between our system runs lies in how the query was constructed. In our baseline run, rmitrun1, we used the words within the brackets embedded in a question template as the query. In our second run, rmitrun2, we added further words to the query: those from the narrative field of a question topic, excluding the introductory part (such as "The analyst would like to know of"). These words give elaborated information about what counts as an answer. To explore the second research question, we wanted an answer list from rmitrun2 that is sufficiently different from that of rmitrun1, but not dramatically dissimilar. We therefore experimented with assigning different weights to the two sets of words from the title and narrative fields. Our submitted run used equal weights for the words from the two fields.
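The two query-construction strategies described above can be sketched as follows. This is a minimal illustration, not the actual RMIT implementation: the function name, the per-term weight representation, and the example topic words are all hypothetical.

```python
# Illustrative sketch of the two query-construction strategies:
# rmitrun1 uses only the bracketed template words; rmitrun2 adds
# narrative-field words, with each set given its own weight.

def build_query(title_words, narrative_words=None,
                title_weight=1.0, narrative_weight=1.0):
    """Return a term -> weight mapping combining the two word sets."""
    weights = {}
    for w in title_words:
        weights[w] = weights.get(w, 0.0) + title_weight
    for w in narrative_words or []:
        weights[w] = weights.get(w, 0.0) + narrative_weight
    return weights

# rmitrun1: bracketed template words only (example words are made up)
run1 = build_query(["robotics", "japan"])

# rmitrun2: narrative words added; the submitted run used equal weights
run2 = build_query(["robotics", "japan"],
                   ["industrial", "automation", "exports"],
                   title_weight=1.0, narrative_weight=1.0)
```

Varying `narrative_weight` relative to `title_weight` would produce answer lists that diverge more or less from the baseline, which is the knob the abstract describes experimenting with.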


Similar Resources

Persuasive, Authoritative and Topical Answers for Complex Question Answering

The ciQA track investigates the role of interaction in answering complex questions: questions that relate two or more entities by some specified relationship. As in ciQA 2006, our interest in ciQA 2007 was in contextual factors that may affect how answers are assessed. In ciQA 2006 we investigated factors such as topical knowledge or confidence in assessing answers through direct questionin...


UMass Complex Interactive Question Answering (ciQA) 2007: Human Performance as Question Answerers

Every day, people widely use information retrieval (IR) systems to answer their questions. We utilized the TREC 2007 complex, interactive question answering (ciQA) track to measure the performance of humans using an interactive IR system to answer questions. Using our IR system, assessors searched for relevant documents and recorded answers to their questions. We submitted the assessors’ answer...


Overview of the TREC 2006 Question Answering Track

The TREC 2006 question answering (QA) track contained two tasks: the main task and the complex, interactive question answering (ciQA) task. As in 2005, the main task consisted of series of factoid, list, and “Other” questions organized around a set of targets; in contrast to previous years, the evaluation of factoid and list responses distinguished between answers that were globally correct (wi...


RMIT at the TREC 2016 LiveQA Track

This paper describes the four systems RMIT fielded for the TREC 2015 LiveQA task and the associated experiments. The challenge results show that the base run RMIT-0 achieved above-average performance, but the other attempted improvements all resulted in decreased retrieval effectiveness.

Keywords: TREC LiveQA 2015; RMIT; passage retrieval; summarization; query trimming; headword expansion


RMIT University at TREC 2004

RMIT University participated in two tracks at TREC 2004: Terabyte and Genomics, both for the first time. This paper describes the techniques we applied and our experiments in both tracks, and discusses the results of the genomics track runs; the terabyte track results are unavailable at the time of manuscript submission. We also describe our new zettair search engine, in use for the first time ...



Journal:

Volume   Issue

Pages  -

Publication date: 2007